30 research outputs found

    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual ballet studio. As the data corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, which are graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. Although the data corpus is tailored specifically for an online dance class application scenario, the data is free to download and use for any research and development purposes.
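The distinctive events mentioned above can be used to align streams whose sensors do not share a common clock. A minimal sketch of this idea, with illustrative names and sample data that are not part of the corpus itself: once the same event (e.g. a hand clap) has been located in each stream, the clock offset is simply the difference between its two timestamps.

```python
# Hypothetical sketch: aligning two unsynchronised sensor streams using the
# timestamp of a shared distinctive event (e.g. a hand clap) in each stream.
# Stream contents and function names are illustrative assumptions.

def align_streams(stream_a, stream_b, event_a, event_b):
    """Shift stream_b's timestamps so the shared event coincides in both.

    stream_a, stream_b: lists of (timestamp, sample) pairs
    event_a, event_b:   timestamp of the sync event in each stream's own clock
    """
    offset = event_a - event_b
    return stream_a, [(t + offset, s) for t, s in stream_b]

# The audio clock puts the clap at t=2.0 s; the inertial sensor's own clock
# puts the corresponding spike at t=5.5 s, so the offset is -3.5 s.
audio = [(2.0, "clap"), (3.0, "music")]
imu = [(5.5, "spike"), (6.5, "step")]
audio_aligned, imu_aligned = align_streams(audio, imu, 2.0, 5.5)
```

After alignment, both streams report the clap at the same timestamp, so subsequent samples can be compared directly.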

    Enhanced visualisation of dance performance from automatically synchronised multimodal recordings

    The Huawei/3DLife Grand Challenge Dataset provides multimodal recordings of Salsa dancing, consisting of audiovisual streams along with depth maps and inertial measurements. In this paper, we propose a system for augmented reality-based evaluations of Salsa dancer performances. An essential step for such a system is the automatic temporal synchronisation of the multiple modalities captured from different sensors, for which we propose efficient solutions. Furthermore, we contribute modules for the automatic analysis of dance performances and present an original software application, specifically designed for the evaluation scenario considered, which enables an enhanced dance visualisation experience through the augmentation of the original media with the results of our automatic analyses.
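One common way to estimate the temporal offset between two recordings of the same event is to maximise their cross-correlation over candidate lags; the paper's actual synchronisation method may differ, so the following pure-Python sketch is only illustrative.

```python
# Illustrative offset estimation by exhaustive cross-correlation search.
# This is a generic technique, not necessarily the one used in the paper.

def best_lag(x, y, max_lag):
    """Return the lag (in samples) by which y is delayed relative to x,
    chosen to maximise the cross-correlation of the two signals."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            x[i] * y[i + lag]
            for i in range(len(x))
            if 0 <= i + lag < len(y)
        )
        if score > best_score:
            best, best_score = lag, score
    return best

# y is x delayed by 3 samples, so the recovered lag should be 3.
x = [0, 0, 0, 1, 5, 1, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 0, 1, 5, 1, 0]
lag = best_lag(x, y, 5)  # → 3
```

In practice this would be applied to a derived feature such as an audio energy envelope rather than raw samples, and a library routine (e.g. FFT-based correlation) would replace the quadratic search.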

    An advanced virtual dance performance evaluator

    The ever-increasing availability of high-speed Internet access has led to a leap in technologies that support real-time realistic interaction between humans in online virtual environments. In the context of this work, we wish to realise the vision of an online dance studio where a dance class is provided by an expert dance teacher and delivered to online students via the web. In this paper we study some of the technical issues that need to be addressed in this challenging scenario. In particular, we describe an automatic dance analysis tool that would be used to evaluate a student's performance and provide him/her with meaningful feedback to aid improvement.

    Kinect vs. low-cost inertial sensing for gesture recognition

    In this paper, we investigate efficient recognition of human gestures/movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We firstly present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system can achieve near real-time recognition by appropriately selecting the sensors that led to the greatest contributing factor for a particular task. Features extracted from multimodal sensor data were used to train and evaluate a customized classifier. This novel technique is capable of successfully classifying various gestures with up to 91% overall accuracy on a publicly available data set. Secondly, we investigate a wide range of different motion capture modalities and compare their results in terms of gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be effectively performed by considering an approach that overcomes many of the limitations associated with the Kinect and potentially paves the way for low-cost gesture recognition in unconstrained environments.
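The pipeline above rests on extracting features from windows of sensor data before classification. A minimal sketch of per-window feature extraction from a single inertial axis follows; the specific features (mean, standard deviation, range) and names are assumptions for illustration, not the paper's exact feature set.

```python
import math

# Hypothetical sketch: sliding-window feature extraction from a 1-D inertial
# signal (e.g. one accelerometer axis). Each window yields a small feature
# vector suitable for a tree-based classifier such as a random forest.

def window_features(signal, window, step):
    """Slide a window over the signal; emit (mean, std, range) per window."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        mean = sum(w) / window
        var = sum((v - mean) ** 2 for v in w) / window
        feats.append((mean, math.sqrt(var), max(w) - min(w)))
    return feats

accel_x = [0.0, 0.2, 0.1, 0.9, 1.1, 1.0, 0.1, 0.0]
features = window_features(accel_x, window=4, step=2)
# Each tuple is one training example for the downstream classifier.
```

In a full system, features from all sensor channels would be concatenated per window and fed to, e.g., scikit-learn's `RandomForestClassifier`.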

    Multimodal human motion capture and synthesis

    Human motion capture (HMC) involves the sensing, recording and mapping of human motion to a digital model. It is useful for many commercial applications and fields of study, including digital animation, virtual reality/gaming, biomechanical/clinical studies and sports performance analysis. What follows is an overview of the current research being carried out in Insight.

    Towards automatic activity classification and movement assessment during a sports training session

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in a real training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a random forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data is generated and used to determine if a subject's movement technique differed from the normative data in order to identify potential injury-related factors. For the joint angle data this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments for both injury management and performance enhancement.
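The DWT step above decomposes each sensor window into approximation and detail coefficients, which then serve as classifier features. A minimal single-level Haar DWT sketch is shown below; the wavelet family and decomposition level are assumptions for illustration, as the abstract does not specify them.

```python
import math

# Illustrative single-level Haar DWT. Real pipelines typically use a wavelet
# library (e.g. PyWavelets) and deeper decompositions.

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients.

    Approximation coefficients capture the low-frequency trend; detail
    coefficients capture high-frequency change, both useful as features.
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

a, d = haar_dwt([4.0, 4.0, 2.0, 0.0])
# a ≈ [5.657, 1.414]; d ≈ [0.0, 1.414]
```

Statistics of these coefficients (energy, entropy, etc.) are what would typically be passed to the random forest.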

    Automatic activity classification and movement assessment during a sports training session using wearable inertial sensors

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in an outdoor training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a random forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the thigh and shank of a subject, from which the flexion-extension knee angle is calculated. Finally, a curve-shift registration technique is applied to both generate normative data and determine if a subject's movement technique differed from the normative data in order to identify potential injury-related factors. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments.
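Once thigh and shank orientations have been estimated, the flexion-extension knee angle is the relative rotation between the two segments. The sketch below uses a deliberately simplified 2-D (sagittal-plane) model with segment angles in degrees; the paper estimates full 3-D orientations via gradient descent, so this is only an assumption-laden illustration of the final step.

```python
# Simplified sagittal-plane model: the knee flexion-extension angle is the
# difference between the thigh and shank segment angles. Segment angles are
# measured from the vertical, in degrees; names here are illustrative.

def knee_flexion_angle(thigh_angle_deg, shank_angle_deg):
    """Relative sagittal-plane angle between thigh and shank segments."""
    return thigh_angle_deg - shank_angle_deg

# Thigh at 10 degrees from vertical, shank at -35 degrees: 45 degrees of flexion.
angle = knee_flexion_angle(10.0, -35.0)  # → 45.0
```

A 3-D implementation would instead compose the two sensor orientation quaternions and extract the flexion-extension component of the relative rotation.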

    Treatment of Opioid-Use Disorders


    Advancements and challenges towards a collaborative framework for 3D tele-immersive social networking

    Social experiences realized through teleconferencing systems are still quite different from face-to-face meetings. The awareness that we are online, in a world that is to some extent less real, prevents us from fully engaging in and enjoying the event. Several reasons for these differences have been identified. We think it is now time to bridge these gaps and propose inspiring and innovative solutions in order to provide realistic, believable and engaging online experiences. We present a distributed and scalable framework named REVERIE that faces these challenges and provides a mix of such solutions. Applications built on top of the framework will be able to provide interactive, truly immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to face-to-face meetings than those offered by conventional teleconferencing systems.

    A multi-modal dance corpus for research into interaction between humans in virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual dance studio. As the dance corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, which are graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. The total duration of the recorded content is 1 h and 40 min for each single sensor, amounting to 55 h of recordings across all sensors. Although the dance corpus is tailored specifically for an online dance class application scenario, the data is free to download and use for any research and development purposes.